MT Quality Estimation: The CMU System for WMT'13
Abstract
In this paper we present our entry to the WMT'13 shared task: Quality Estimation (QE) for machine translation (MT). We participated in the 1.1, 1.2 and 1.3 sub-tasks with our QE system, which is trained on features from diverse information sources such as MT decoder features, n-best lists, mono- and bi-lingual corpora, and GIZA training models. Our system shows competitive results in the workshop shared task.
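The abstract describes sentence-level QE as regression over features drawn from several information sources. Below is a minimal sketch of that general setup, assuming a scikit-learn regressor; the feature set, data, and HTER-style labels are illustrative placeholders, not the features or data used in the submitted system.

```python
# A minimal sketch (not the authors' implementation) of feature-based
# sentence-level quality estimation: hand-crafted features for each
# source/translation pair are fed to a regressor that predicts a quality
# score.  Feature names and the SVR settings are illustrative assumptions.
from sklearn.svm import SVR

def extract_features(source, translation, lm_score, nbest_size):
    """Toy feature vector: length statistics plus decoder/LM information."""
    src_len = len(source.split())
    tgt_len = len(translation.split())
    return [
        src_len,
        tgt_len,
        tgt_len / max(src_len, 1),   # target/source length ratio
        lm_score,                    # target language-model score
        nbest_size,                  # size of the decoder's n-best list
    ]

# Hypothetical training data: (source, MT output, LM score, n-best size) -> quality label
train_pairs = [
    ("das ist ein Test", "this is a test", -4.2, 100),
    ("guten Morgen", "good morning everyone", -7.9, 100),
]
train_labels = [0.1, 0.6]  # e.g., HTER-style scores in [0, 1]

X = [extract_features(s, t, lm, nb) for s, t, lm, nb in train_pairs]
model = SVR(kernel="rbf").fit(X, train_labels)

print(model.predict([extract_features("das ist gut", "that is good", -3.5, 100)]))
```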
Similar Resources
CMU System Combination via Hypothesis Selection for WMT'10
This paper describes the CMU entry for the system combination shared task at WMT'10. Our combination method is hypothesis selection, which uses information from the n-best lists of the input MT systems, where available. The sentence-level features used are independent of the MT systems involved. Compared to the baseline we added source-to-target word alignment based features and trained system ...
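Hypothesis selection, as described above, pools the n-best hypotheses of all input systems and picks one using system-independent sentence-level features. A minimal sketch of that idea, with an assumed toy scorer (a length-normalized language-model score stand-in), not the paper's actual feature set:

```python
# A minimal sketch (assumptions, not the paper's system) of hypothesis
# selection for system combination: pool the n-best hypotheses of all input
# MT systems and return the one with the best system-independent score.
def sentence_score(hypothesis, lm_logprob):
    """Toy scorer: length-normalized LM log-probability."""
    length = max(len(hypothesis.split()), 1)
    return lm_logprob / length

def select_hypothesis(nbest_lists):
    """nbest_lists: {system_name: [(hypothesis, lm_logprob), ...]}"""
    pooled = [
        (hyp, lm) for hyps in nbest_lists.values() for hyp, lm in hyps
    ]
    return max(pooled, key=lambda pair: sentence_score(*pair))[0]

# Hypothetical n-best lists from two MT systems.
nbest = {
    "system_A": [("this is a test", -6.1), ("this is test", -7.4)],
    "system_B": [("it is a test", -6.8)],
}
print(select_hypothesis(nbest))
```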
CMU System Combination for WMT'09
This paper describes the CMU entry for the system combination shared task at WMT'09. Our combination method is hypothesis selection, which uses information from the n-best lists of several MT systems. The sentence-level features are independent of the MT systems involved. To compensate for various n-best list sizes in the workshop shared task, including first-best-only entries, we normalize one o...
CMU Syntax-Based Machine Translation at WMT 2011
We present the Carnegie Mellon University Stat-XFER group submission to the WMT 2011 shared translation task. We built a hybrid syntactic MT system for French–English using the Joshua decoder and an automatically acquired SCFG. New work for this year includes training data selection and grammar filtering. Expanded training data selection significantly increased translation scores and lowered OO...
Exploring Consensus in Machine Translation for Quality Estimation
This paper presents the use of consensus among Machine Translation (MT) systems for the WMT14 Quality Estimation shared task. Consensus is explored here by comparing the MT system output against several alternative machine translations using standard evaluation metrics. Figures extracted from such metrics are used as features to complement baseline prediction models. The hypothesis is that know...
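The consensus idea sketched in this abstract can be illustrated as follows: score one system's output against alternative machine translations with a standard metric and use the resulting figures as extra features for a quality-estimation model. A minimal sketch assuming NLTK's sentence-level BLEU; the example sentences are invented:

```python
# A minimal sketch (an assumption, not the shared-task submission) of
# consensus features: compare a candidate translation against the outputs
# of other MT systems with a standard metric (sentence BLEU here).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def consensus_features(candidate, alternatives):
    """BLEU of the candidate against each alternative MT output."""
    smooth = SmoothingFunction().method1
    cand_tokens = candidate.split()
    return [
        sentence_bleu([alt.split()], cand_tokens, smoothing_function=smooth)
        for alt in alternatives
    ]

candidate = "this is a test sentence"
other_systems = ["this is a test phrase", "it is a test sentence"]
print(consensus_features(candidate, other_systems))
```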
Ten Years of WMT Evaluation Campaigns: Lessons Learnt
The WMT evaluation campaign (http://www.statmt.org/wmt16) has been run annually since 2006. It is a collection of shared tasks related to machine translation, in which researchers compare their techniques against those of others in the field. The longest running task in the campaign is the translation task, where participants translate a common test set with their MT systems. In addition to the...
Publication year: 2013